Random marginal agreement coefficients: rethinking the adjustment for chance when measuring agreement
Authors
Abstract
Similar resources
Measuring diagnostic agreement.
Diagnostic agreement tests the reliability and concordance of diagnostic systems. The introduction of measures of agreement with reputations for baserate independence (e.g., Yule's Y and Q), and new studies occasioned by the publication of the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV; American Psychiatric Association, 1994) and the International Classification of D...
Measuring agreement between diagnostic devices.
There is growing interest in using portable monitoring for investigating patients with suspected sleep apnea. Research studies typically report portable monitoring results in comparison with the results of sleep laboratory-based polysomnography. A systematic review of this research has recently been completed by a joint working group of the American College of Chest Physicians, the American Tho...
Accounting for Chance Agreement in Gesture Elicitation Studies
The level of agreement among participants is a key aspect of gesture elicitation studies, and it is typically quantified by means of agreement rates (AR). We show that this measure is problematic, as it does not account for chance agreement. The problem of chance agreement has been extensively discussed in a range of scientific fields in the context of inter-rater reliability studies. We review...
Interrater Agreement of the Adjustment Scales for Children and Adolescents
Standardized behavior rating scales and checklists offer unobtrusive evaluations of students' behavior in natural social environments. This study investigated the interrater agreement of the Adjustment Scales for Children and Adolescents (ASCA), a behavior rating scale used in school settings. Participants were 71 students enrolled in a variety of special programs who were rated by 29 observe...
A chance-corrected measure of inter-annotator agreement for syntax
Following the works of Carletta (1996) and Artstein and Poesio (2008), there is an increasing consensus within the field that in order to properly gauge the reliability of an annotation effort, chance-corrected measures of inter-annotator agreement should be used. With this in mind, it is striking that virtually all evaluations of syntactic annotation efforts use uncorrected parser evaluation m...
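The related abstracts above contrast raw agreement rates with coefficients that adjust for chance agreement (such as Cohen's kappa) and with coefficients reputed to be base-rate independent (such as Yule's Q and Y). The sketch below is a minimal, generic illustration of these standard measures on a hypothetical 2x2 cross-classification of two raters; it is not the random marginal agreement coefficient proposed in the present paper, and the counts are invented for illustration only.

```python
# Minimal sketch of standard agreement measures on a 2x2 table
# (not the random marginal agreement coefficient from this paper).
from math import sqrt


def percent_agreement(a, b, c, d):
    """Raw proportion of agreement: a = both raters positive,
    d = both negative, b and c = the two kinds of disagreement."""
    n = a + b + c + d
    return (a + d) / n


def cohens_kappa(a, b, c, d):
    """Cohen's kappa: agreement corrected for the agreement expected
    by chance from the two raters' marginal totals."""
    n = a + b + c + d
    p_o = (a + d) / n                      # observed agreement
    p_pos = ((a + b) / n) * ((a + c) / n)  # chance agreement on "positive"
    p_neg = ((c + d) / n) * ((b + d) / n)  # chance agreement on "negative"
    p_e = p_pos + p_neg                    # total chance-expected agreement
    return (p_o - p_e) / (1 - p_e)


def yules_q(a, b, c, d):
    """Yule's Q, a function of the odds ratio, often cited as base-rate independent."""
    return (a * d - b * c) / (a * d + b * c)


def yules_y(a, b, c, d):
    """Yule's Y (coefficient of colligation)."""
    return (sqrt(a * d) - sqrt(b * c)) / (sqrt(a * d) + sqrt(b * c))


if __name__ == "__main__":
    # Hypothetical table: two raters classify 100 cases as positive/negative.
    a, b, c, d = 40, 10, 5, 45
    print("raw agreement:", percent_agreement(a, b, c, d))
    print("Cohen's kappa:", round(cohens_kappa(a, b, c, d), 3))
    print("Yule's Q:     ", round(yules_q(a, b, c, d), 3))
    print("Yule's Y:     ", round(yules_y(a, b, c, d), 3))
```

In this invented example the raters agree on 85% of cases, yet kappa is 0.70 once the agreement expected from the marginal totals alone is subtracted, which is the kind of chance adjustment the papers listed above discuss.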
Journal
Journal title: Biostatistics
Year: 2004
ISSN: 1465-4644, 1468-4357
DOI: 10.1093/biostatistics/kxh027